
    Correcting curvature-density effects in the Hamilton-Jacobi skeleton

    The Hamilton-Jacobi approach has proven to be a powerful and elegant method for extracting the skeleton of two-dimensional (2-D) shapes. The approach is based on the observation that the normalized flux associated with the inward evolution of the object boundary tends to zero at nonskeletal points as the size of the integration area tends to zero, while the flux is negative at the locations of skeletal points. Nonetheless, the error in calculating the flux on the image lattice is both limited by the pixel resolution and proportional to the curvature of the boundary evolution front and, hence, unbounded near endpoints. This makes the exact localization of endpoints difficult and renders the performance of the skeleton extraction algorithm dependent on a threshold parameter. This problem can be overcome by using interpolation techniques to calculate the flux with subpixel precision. Here, however, we develop a method for 2-D skeleton extraction that circumvents the problem by eliminating the curvature contribution to the error. This is done by taking into account variations of density due to boundary curvature. This yields a skeletonization algorithm that gives both better localization and less susceptibility to boundary noise and parameter choice than the Hamilton-Jacobi method.
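
    As a rough illustration of the quantity involved, the sketch below computes the baseline average outward flux of the distance-transform gradient on a binary mask; skeletal pixels are those where this flux is strongly negative. It assumes NumPy and SciPy and shows only the uncorrected flux, not the paper's density-corrected variant.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def average_outward_flux(mask):
    """Average outward flux of the distance-transform gradient over the
    8-neighbourhood of each pixel (baseline flux, no curvature correction)."""
    dist = distance_transform_edt(mask)        # distance to boundary, >0 inside
    gy, gx = np.gradient(dist)
    flux = np.zeros_like(dist)
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        norm = np.hypot(dy, dx)
        ny, nx = dy / norm, dx / norm          # unit vector towards the neighbour
        # Dot the gradient at each neighbour with the outward direction.
        flux += (np.roll(gy, (-dy, -dx), (0, 1)) * ny +
                 np.roll(gx, (-dy, -dx), (0, 1)) * nx)
    return flux / 8.0                          # strongly negative at skeletal points
```

    A thresholded skeleton would then be `(average_outward_flux(mask) < -tau) & mask`; the choice of `tau` is exactly the sensitivity the paper sets out to reduce.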

    SHREC'16: partial matching of deformable shapes

    Matching deformable 3D shapes under partiality transformations is a challenging problem that has received limited attention in the computer vision and graphics communities. With this benchmark, we explore and thoroughly investigate the robustness of existing matching methods in this challenging setting. Participants are asked to provide a point-to-point correspondence (either sparse or dense) between deformable shapes undergoing different kinds of partiality transformations, resulting in a total of 400 matching problems to be solved by each method, making this benchmark the largest and most challenging of its kind. Five matching algorithms were evaluated in the contest; this paper presents the details of the dataset, the adopted evaluation measures, and thorough comparisons among all competing methods.
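
    A common way such benchmarks score a correspondence is the fraction of matches whose geodesic error falls below a threshold; the sketch below follows that standard recipe. The diameter normalisation and the precomputed geodesic matrix are assumptions made here for illustration, not details taken from the paper.

```python
import numpy as np

def cumulative_geodesic_accuracy(pred, gt, geodesic, thresholds):
    """Fraction of matches whose geodesic error is below each threshold.

    pred, gt  : index arrays of predicted and ground-truth matches on the
                target shape (one entry per query point)
    geodesic  : precomputed pairwise geodesic distance matrix on the target,
                assumed normalised by the shape diameter
    """
    errors = geodesic[pred, gt]                # geodesic error of each match
    return np.array([(errors <= t).mean() for t in thresholds])
```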

    You Can't See Me: Anonymizing Graphs Using the Szemerédi Regularity Lemma

    Complex networks gathered from our online interactions provide a rich source of information that can be used to model and predict our behavior. While this has very tangible benefits that we have all grown accustomed to, there is a concrete privacy risk in sharing potentially sensitive data about ourselves and the people we interact with, especially when this data is publicly available online and unprotected from malicious attacks. k-anonymity is a technique aimed at reducing this risk by obfuscating the topological information of a graph that can be used to infer the nodes' identity. In this paper we propose a novel algorithm to enforce k-anonymity based on a well-known result in extremal graph theory, the Szemerédi regularity lemma. Given a graph, we start by computing a regular partition of its nodes. The Szemerédi regularity lemma ensures that such a partition exists and that the edges between the sets of nodes behave almost randomly. With this partition, we anonymize the graph by randomizing the edges within each set, obtaining a graph that is structurally similar to the original one yet whose nodes are structurally indistinguishable within each set. We test the proposed approach on real-world networks extracted from Facebook. Our experimental results show that the proposed approach is able to anonymize a graph while retaining most of its structural information.
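
    The randomization step described above lends itself to a compact sketch: given the classes of a partition, replace each class's internal edges with an equal number of random ones. The sketch assumes the regular partition is already available (computing it is the substantial part of the method) and uses NetworkX for the graph operations.

```python
import random
from itertools import combinations
import networkx as nx

def anonymize_within_classes(G, partition, seed=0):
    """Replace each class's internal edges with the same number of random
    internal edges, making nodes within a class structurally interchangeable.
    The regular partition itself is assumed to be given."""
    rng = random.Random(seed)
    H = G.copy()
    for cls in partition:
        pairs = list(combinations(cls, 2))     # all possible internal edges
        internal = [e for e in pairs if H.has_edge(*e)]
        H.remove_edges_from(internal)
        # Re-insert the same number of edges, drawn uniformly at random.
        H.add_edges_from(rng.sample(pairs, len(internal)))
    return H
```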

    Unsupervised Semantic Discovery Through Visual Patterns Detection

    We propose a new, fast, fully unsupervised method to discover semantic patterns. Our algorithm is able to hierarchically find visual categories and produce a segmentation mask where previous methods fail. Through the modeling of what constitutes a visual pattern in an image, we introduce the notion of “semantic levels” and devise a conceptual framework along with measures and a dedicated benchmark dataset for future comparisons. Our algorithm is composed of two phases: a filtering phase, which selects semantic hotspots by means of an accumulator space, and a clustering phase, which propagates the semantic properties of the hotspots on a superpixel basis. We provide both qualitative and quantitative experimental validation, achieving optimal results in terms of robustness to noise and semantic consistency. We also make the code and dataset publicly available.
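
    One plausible reading of the filtering phase is a voting scheme over image space: candidate pattern locations cast votes into a coarse accumulator grid, and well-supported cells become hotspots. The sketch below is a toy version of that idea under these assumptions, not the authors' exact procedure.

```python
import numpy as np

def select_hotspots(votes, img_shape, cell=16, min_votes=5):
    """Toy accumulator-space filter: bin candidate pattern locations into a
    coarse grid and keep cells with enough support as hotspot centres."""
    acc = np.zeros((img_shape[0] // cell + 1, img_shape[1] // cell + 1))
    for y, x in votes:                         # each vote is a (y, x) location
        acc[int(y) // cell, int(x) // cell] += 1
    ys, xs = np.nonzero(acc >= min_votes)
    # Return hotspot centres back in image coordinates.
    return [(y * cell + cell // 2, x * cell + cell // 2) for y, x in zip(ys, xs)]
```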

    k-Anonymity on Graphs using the Szemerédi Regularity Lemma

    Graph anonymisation aims at reducing the ability of an attacker to identify the nodes of a graph by obfuscating its structural information. In k-anonymity, this means making each node indistinguishable from at least k-1 other nodes. Simply stripping the nodes of a graph of their identifying labels is insufficient, as with enough structural knowledge an attacker can still recover the nodes' identities. We propose an algorithm to enforce k-anonymity based on the Szemerédi regularity lemma. Given a graph, we start by computing a regular partition of its nodes. The Szemerédi regularity lemma ensures that such a partition exists and that the edges between the sets of nodes behave quasi-randomly. With this partition in hand, we anonymize the graph by randomizing the edges within each set, obtaining a graph that is structurally similar to the original one yet whose nodes are structurally indistinguishable within each set. Unlike other k-anonymisation methods, our approach does not consider a single type of attack, but instead aims to prevent any structure-based de-anonymisation attempt. We test our framework on a wide range of real-world networks and compare it against another simple yet widely used k-anonymisation technique, demonstrating the effectiveness of our approach.
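
    A quick way to sanity-check an anonymised graph is the weakest, degree-based notion of k-anonymity: every degree value should be shared by at least k nodes. The check below covers only this necessary condition; the arbitrary structure-based attacks the paper targets are far broader.

```python
from collections import Counter

def is_degree_k_anonymous(G, k):
    """Necessary (but not sufficient) check for k-anonymity: every degree
    value occurring in the graph must be shared by at least k nodes."""
    degree_counts = Counter(d for _, d in G.degree())
    return all(count >= k for count in degree_counts.values())
```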

    Cylinders extraction in non-oriented point clouds as a clustering problem

    Finding geometric primitives in 3D point clouds is a fundamental task in many engineering applications such as robotics, autonomous vehicles and automated industrial inspection. Among all solid shapes, cylinders are frequently found in a variety of scenes, comprising natural and man-made objects. Despite their ubiquitous presence, automated extraction and fitting can become challenging if performed “in the wild”, when the number of primitives is unknown or the point cloud is noisy and not oriented. In this paper we pose the problem of extracting multiple cylinders in a scene by means of a game-theoretic inlier selection process exploiting the geometric relations between pairs of axis candidates. First, we formulate the similarity between two possible cylinders considering the rigid motion aligning the two axes to the same line. This motion is represented with a unitary dual quaternion, so that the distance between two cylinders is induced by the length of the shortest geodesic path in SE(3). Then, a game-theoretic process exploits this similarity function to extract sets of primitives maximizing their mutual consensus. The outcome of the evolutionary process is a probability distribution over the set of candidates (i.e., axes), which in turn is used to directly estimate the final cylinder parameters. An extensive experimental section shows that the proposed algorithm offers high resilience to noise, since the process inherently discards inconsistent data. Compared to other methods, it does not need point normals and does not require fine tuning of multiple parameters.
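
    Game-theoretic inlier selection of this kind is typically driven by replicator dynamics over a payoff matrix of pairwise similarities; the support of the converged distribution is the mutually consistent candidate set. The sketch below shows that standard evolution step and leaves out the dual-quaternion similarity itself.

```python
import numpy as np

def replicator_dynamics(A, iters=1000, tol=1e-8):
    """Evolve a population over candidate axes under a nonnegative payoff
    matrix A of pairwise similarities; the surviving support identifies a
    set of mutually consistent candidates."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                    # start from the barycentre
    for _ in range(iters):
        y = x * (A @ x)                        # fitness-proportional update
        total = y.sum()
        if total == 0:
            break
        y /= total
        if np.abs(y - x).sum() < tol:          # converged to a fixed point
            return y
        x = y
    return x                                   # probability over candidates
```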

    Learning Structure from Samples

    Matching hierarchical structures for shape recognition

    In this thesis we aim to develop a framework for clustering trees and for representing and learning a generative model of graph structures from a set of training samples. The approach is applied to the problem of the recognition and classification of shapes abstracted in terms of their morphological skeletons. We make five contributions. The first is an algorithm to approximate tree edit-distance using relaxation labeling. The second is the introduction of the tree union, a representation capable of capturing the modes of structural variation present in a set of trees. The third is an information-theoretic approach to learning a generative model of tree structures from a training set. While the skeletal abstraction of shape was chosen mainly as an experimental vehicle, we nonetheless make some contributions to the fields of skeleton extraction and its graph representation. In particular, our fourth contribution is the development of a skeletonization method that corrects curvature effects in the Hamilton-Jacobi framework, improving its localization and noise sensitivity. Finally, we propose a shape-measure capable of characterizing shapes abstracted in terms of their skeleton. This measure has a number of interesting properties. In particular, it varies smoothly as the shape is deformed and can be easily computed using the presented skeleton extraction algorithm. Each chapter presents an experimental analysis of the proposed approaches applied to shape recognition problems.
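
    To make the tree-matching theme concrete, here is a naive recursive similarity between unordered trees that pairs up children by maximum-weight assignment. It is a toy measure for intuition only, not the thesis's relaxation-labelling approximation of tree edit-distance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tree_similarity(a, b):
    """Similarity between unordered trees given as nested tuples of children:
    recursively match children so the total subtree similarity is maximised."""
    if not a or not b:                         # a leaf on either side
        return 1.0
    # Pairwise similarities between the children of a and the children of b.
    S = np.array([[tree_similarity(ca, cb) for cb in b] for ca in a])
    rows, cols = linear_sum_assignment(-S)     # maximise matched similarity
    return 1.0 + S[rows, cols].sum()
```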

    One-Shot HDR Imaging via Stereo PFA Cameras

    High Dynamic Range (HDR) imaging techniques aim to increase the range of luminance values captured from a scene. The literature contains many approaches for obtaining HDR images from low-range camera sensors; however, most of them rely on multiple acquisitions, producing ghosting artifacts when moving objects are present. In this paper we propose a novel HDR reconstruction method exploiting stereo Polarimetric Filter Array (PFA) cameras to simultaneously capture the scene through differently polarized filters, producing intensity attenuations that can be related to the light's polarization state. An additional linear polarizer is mounted in front of one of the two cameras, raising the degree of polarization of the rays captured by that sensor. This leads to a larger attenuation range between channels regardless of the scene lighting conditions. By merging the data acquired by the two cameras, we can compute the actual light attenuation observed by a pixel at each channel and derive an equivalent exposure time, producing an HDR picture from a single polarimetric shot. The proposed technique achieves results comparable to classic HDR approaches using multiple exposures, with the advantage of being a one-shot method.
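
    Once a per-channel attenuation has been estimated, each polarization channel can be treated like a differently exposed frame and merged with a Debevec-style weighted average. The sketch below illustrates only that final merging idea; the stereo estimation of the attenuations is assumed already done, and the hat weighting is a conventional choice rather than the paper's.

```python
import numpy as np

def merge_polarimetric_hdr(channels, attenuations):
    """Merge the four PFA intensity channels into one radiance estimate,
    treating each channel's polarization attenuation as an equivalent
    exposure time.

    channels     : (4, H, W) raw intensities, normalised to [0, 1]
    attenuations : (4, H, W) estimated attenuation factors in (0, 1]
    """
    # Hat weighting favours mid-range samples over clipped or noisy ones.
    w = np.clip(1.0 - np.abs(2.0 * channels - 1.0), 1e-3, None)
    # Weighted average of per-channel radiance estimates I / attenuation.
    return (w * channels / attenuations).sum(0) / w.sum(0)
```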